
feat: experimental local ai model #254

Merged: 13 commits into main from feat/local-ai on Dec 4, 2024
Conversation

janrtvld (Contributor) commented Dec 3, 2024

Performance is not that great yet. I'll do some research on prompting techniques for smaller models, or we could switch to the 3B model; that's also possible.

Still need to test on Android and decide which devices we support. We can use expo-device to select the devices we want (we can even filter on how much RAM the device has).
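For illustration only, a minimal sketch (not from this PR) of how expo-device could gate the feature on device RAM. The helper name `supportsLocalAiModel` and the exact threshold handling are assumptions; `Device.totalMemory` (bytes, possibly null) and `Device.isDevice` are real expo-device exports.

```ts
// Minimal sketch of a RAM-based device check using expo-device.
// Device.totalMemory is reported in bytes and can be null, so unknown memory
// is treated as unsupported. The 4GB threshold matches the requirement
// settled on later in this PR.
import * as Device from 'expo-device'

const MIN_TOTAL_MEMORY_BYTES = 4 * 1024 * 1024 * 1024 // 4GB

export function supportsLocalAiModel(): boolean {
  if (!Device.isDevice) return false // exclude simulators/emulators
  if (Device.totalMemory == null) return false // unknown memory: treat as unsupported
  return Device.totalMemory >= MIN_TOTAL_MEMORY_BYTES
}
```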

@@ -117,6 +117,9 @@ const config = {
      ],
    },
    associatedDomains: associatedDomains.map((host) => `applinks:${host}`),
    entitlements: {
      'com.apple.developer.kernel.increased-memory-limit': true,
Member

Anything we also need to configure in App Store Connect?

<Heading variant="h1" fontWeight="$bold">
  Settings
</Heading>
<>
Member

This <> </> fragment wrapper doesn't seem to be used?

}

try {
  setResponse('') // This might be causing issues - let's move it
Member

?
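Not part of the PR, but to make the inline comment above concrete: a hypothetical sketch of moving `setResponse('')` out of the `try` block so the previous output is cleared exactly once before a new generation starts. `generate` stands in for the local model call; its real name and signature are not shown in the snippet above.

```tsx
// Hypothetical sketch only; the generation API below is an assumption.
import { useState } from 'react'

// Placeholder for the actual local model call used in this PR.
declare function generate(prompt: string, onToken: (token: string) => void): Promise<void>

export function useLocalModelResponse() {
  const [response, setResponse] = useState('')

  const handleGenerate = async (prompt: string) => {
    setResponse('') // clear previous output once, before the new request starts
    try {
      // append streamed tokens to the response as they arrive
      await generate(prompt, (token) => setResponse((prev) => prev + token))
    } catch (error) {
      console.error('Local model generation failed', error)
    }
  }

  return { response, handleGenerate }
}
```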

TimoGlastra (Member) left a comment

very nice 👍

janrtvld (Contributor, Author) commented Dec 4, 2024

Works on Android! It is, however, pretty slow: about a full minute before the result comes in. I am using an older device, though.

janrtvld marked this pull request as ready for review on December 4, 2024 at 11:28
janrtvld (Contributor, Author) commented Dec 4, 2024

Performance is better, but still a bit meh. I guess we can still run with it, as it's an experimental feature anyway. I don't really want to use the 3B model, since it's even slower and we'd have to raise the device requirements much higher.

It now works on devices with 4GB RAM and up, with higher-end devices responding much quicker.
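Purely illustrative: one way the 4GB threshold could be applied in the settings UI, reusing the `supportsLocalAiModel` sketch from earlier in this thread. The component name and copy below are made up, not taken from this PR.

```tsx
// Hypothetical usage of the earlier RAM-check sketch: hide the experimental
// local AI section on devices below the 4GB requirement.
import React from 'react'
import { Text } from 'react-native'
import { supportsLocalAiModel } from './supportsLocalAiModel'

export function LocalAiSettingsSection() {
  if (!supportsLocalAiModel()) return null
  return <Text>Experimental: on-device AI model</Text>
}
```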

janrtvld merged commit 25aff70 into main on Dec 4, 2024
1 check passed
janrtvld deleted the feat/local-ai branch on December 4, 2024 at 13:44